Regularization Uses Fractal Priors
Author
Abstract
Many of the processing tasks arising in early vision involve the solution of ill-posed inverse problems. Two techniques that are often used to solve these inverse problems are regularization and Bayesian modeling. Regularization is used to find a solution that both fits the data and is also sufficiently smooth. Bayesian modeling uses a statistical prior model of the field being estimated to determine an optimal solution. One convenient way of specifying the prior model is to associate an energy function with each possible solution, and to use a Boltzmann distribution to relate the solution energy to its probability. This paper shows that regularization is an example of Bayesian modeling, and that using the regularization energy function for the surface interpolation problem results in a prior model that is fractal (self-affine over a range of scales). We derive an algorithm for generating typical (fractal) estimates from the posterior distribution. We also show how this algorithm can be used to estimate the uncertainty associated with a regularized solution, and how this uncertainty can be used at later stages of processing.
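The abstract's central idea, a Boltzmann prior p(u) ∝ exp(−E(u)/T) built from a regularization energy, can be illustrated with a small sketch. The following is a hypothetical 1-D example (the paper treats surface interpolation in 2-D): a membrane smoothness energy plus a sparse-data term yields a Gaussian posterior, whose mean is the regularized (MAP) estimate and from which typical samples can be drawn. All sizes, weights, and observation sites here are illustrative assumptions, not values from the paper.

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)

# First-difference operator D, so the membrane smoothness energy is
# E_smooth(u) = lam * ||D u||^2 (a discrete regularization energy).
D = np.diff(np.eye(n), axis=0)          # shape (n-1, n)
lam = 10.0

# Sparse noisy observations d at a few sites (simulated here).
obs_idx = np.array([5, 20, 40, 60])
d = np.zeros(n)
d[obs_idx] = rng.normal(size=obs_idx.size)
W = np.zeros((n, n))                    # data-weight matrix (1 at observed sites)
W[obs_idx, obs_idx] = 1.0

# With a Boltzmann prior, the posterior is Gaussian with
# precision A = W + lam * D^T D and mean A^{-1} W d.
A = W + lam * (D.T @ D)
mean = np.linalg.solve(A, W @ d)        # the regularized (MAP) estimate

# A typical sample from the posterior: u = mean + L^{-T} z, where A = L L^T.
L = np.linalg.cholesky(A)
z = rng.normal(size=n)
sample = mean + np.linalg.solve(L.T, z)
```

The MAP estimate `mean` is the smooth interpolant that ordinary regularization would return; `sample` is a "typical" posterior surface, which retains the rough, fractal-like character that the smooth estimate averages away. The spread of many such samples is one way to quantify the uncertainty mentioned in the abstract.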
Similar references
Fractal Modeling of Natural Terrain: Analysis and Surface Reconstruction with Range Data
In this paper we address two issues in modeling natural terrain using fractal geometry: estimation of fractal dimension, and fractal surface reconstruction. For estimation of fractal dimension, we extend the fractal Brownian function approach to accommodate irregularly sampled data, and we develop methods for segmenting sets of points exhibiting self-similarity over only certain scales. For fra...
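The fractal Brownian function approach mentioned above rests on the relation E[(u(x+h) − u(x))²] ∝ h^(2H), so the log-log slope of the mean squared increment gives the parameter H, and a 1-D profile has fractal dimension D = 2 − H. A minimal sketch, using ordinary Brownian motion (H = 0.5) as synthetic data rather than real terrain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trace: cumulative sum of white noise approximates Brownian
# motion, which has Hurst exponent H = 0.5.
n = 4096
u = np.cumsum(rng.normal(size=n))

# Fractal-Brownian-function estimate: mean squared increment vs. lag,
# fitted on log-log axes; the slope equals 2H.
lags = np.array([1, 2, 4, 8, 16, 32])
msq = np.array([np.mean((u[h:] - u[:-h]) ** 2) for h in lags])
H = np.polyfit(np.log(lags), np.log(msq), 1)[0] / 2.0
D = 2.0 - H     # fractal dimension of the 1-D profile
```

The paper above extends this basic idea to irregularly sampled data and to detecting the range of scales over which the power-law fit actually holds; this sketch assumes regular sampling and fits all lags at once.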
Inverse Problems and Self-similarity in Imaging
This thesis examines the concept of image self-similarity and provides solutions to various associated inverse problems such as resolution enhancement and missing fractal codes. In general, many real-world inverse problems are ill-posed, mainly because of the lack of existence of a unique solution. The procedure of providing acceptable unique solutions to such problems is known as regularizatio...
Removing Motion Blur using Natural Image Statistics
We tackle deconvolution of motion blur in hand-held consumer photography with a Bayesian framework combining sparse gradient and color priors for regularization. We develop a closed-form optimization utilizing iterated re-weighted least squares (IRLS) with a Gaussian approximation of the regularization priors. The model parameters of the priors can be learned from a set of natural images which ...
Gaussian Markov Random Field Priors for Inverse Problems
In this paper, our focus is on the connections between the methods of (quadratic) regularization for inverse problems and Gaussian Markov random field (GMRF) priors for problems in spatial statistics. We begin with the most standard GMRFs defined on a uniform computational grid, which correspond to the oft-used discrete negative-Laplacian regularization matrix. Next, we present a class of GMRFs...
Bayesian shrinkage
Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimension...